5. General Patterns vs. Individual Measures

Transient & Individual Data lacks the Certitude that Science requires

Predictions regarding Batting Averages not guaranteed within any range

The previous article illustrated how the Living Algorithm specializes in characterizing individual moments in a data stream, while Probability specializes in characterizing universal features of fixed data sets. We’ve also seen how one of Probability’s measures, the batting average, is employed in a predictive fashion. This average provides an estimate of future performance that is highly valued by the baseball community. Note that neither Probability’s nor the Living Algorithm’s estimates of a player's performance are guaranteed. The batter could fall into, or break out of, a slump at any moment. The measures provided by either system indicate only a probable performance, not a predetermined one. Further, it is impossible even to put a probable range on the accuracy of either prediction.

Why can’t the scientific community apply its rigorous tools to the batting average? And if the batting average has no scientific value, what scientific significance does the Living Algorithm’s Predictive Cloud have?

Player's Performance too individual and transient to have scientific value

Probability can apply his averages and standard deviations to the growing data set of a player’s ‘at bats’. However, Probability cannot apply his more sophisticated statistical tools, the ones that measure the parameters needed to determine predictive accuracy, i.e. the precision of the estimate. Specifically, he cannot apply the standard error of the mean, which is necessary to set confidence levels. This is a fatal flaw in any scientific study based on statistical analysis, as the careful application of these tools is essential for publication in scientific journals. In short, a baseball player's batting average is too individual and transient to have any scientific value.
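To make this concrete, here is a minimal sketch in Python of the measures under discussion, using an invented record of ‘at bats’ (the numbers are hypothetical, not from the article). It computes the batting average, the standard deviation of the set, and the standard error of the mean with a 95% confidence interval. Note that the interval is only meaningful if every at-bat is an independent draw from one fixed distribution, precisely the assumption that a living data stream violates.

```python
import math

# Hypothetical record of 'at bats': 1 = hit, 0 = out.
at_bats = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0]

n = len(at_bats)
average = sum(at_bats) / n  # the batting average: the mean of the fixed set

# Sample standard deviation of the set.
variance = sum((x - average) ** 2 for x in at_bats) / (n - 1)
std_dev = math.sqrt(variance)

# Standard error of the mean, and the 95% confidence interval it yields.
# Valid only if the at-bats are independent, identically distributed
# trials, the very assumption a living data stream breaks.
sem = std_dev / math.sqrt(n)
low, high = average - 1.96 * sem, average + 1.96 * sem

print(f"average = {average:.3f}, sem = {sem:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```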

The opinion poll

To crystallize these ideas, let’s look at an example from our political world. The more thorough polls survey 2000 random people from the rolls of American voters to make estimates about the outcome of an election. Pollsters then make statements like: “If the election were held right now, 45% of the population is likely to vote for Obama, with a range of 5% in either direction.” This is the practical application of Probability's measures. An average (45%) that characterizes the set is highlighted, along with a range (5%). The range indicates the confidence limits of the estimate of voter preference.
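For reference, here is a minimal sketch of the textbook sampling arithmetic behind such a statement, treating the article's figures (a sample of 2000, a 45% share) as hypothetical inputs. Under the idealized assumption of a simple random sample, the pure sampling margin at 95% confidence works out to roughly plus or minus 2 points; quoted ranges such as the 5% above are presumably wider because real polls carry error beyond sampling noise.

```python
import math

n = 2000   # sample size from the article's example
p = 0.45   # reported share intending to vote for the candidate

# Standard error of a sample proportion, assuming a simple random
# sample of independent respondents (a textbook idealization).
se = math.sqrt(p * (1 - p) / n)

# 95% confidence margin: 1.96 standard errors.
margin = 1.96 * se

print(f"sampling margin of error: +/- {100 * margin:.1f} points")
# Prints roughly +/- 2.2 points. Published ranges, like the 5% above,
# are often wider, reflecting error beyond pure sampling noise.
```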

Can’t generalize Opinion Polls: No scientific value

Although Probability provides precise measures concerning the general features of the data sets of these opinion polls, the results only apply to that moment in history. The qualifying statement, “If the election were held right now”, indicates how tenuous the predictions are. Due to the volatility of the political conditions underlying voter preferences, the opinions of the voter set are continually subject to change. Accordingly, the intentions of a set of voters at one moment in time can only be loosely compared with the intentions of a set of future voters. Because of the transient and individual nature of these polls, there have been many notorious examples of voter polls predicting the success of one candidate when the other went on to win. Due to this lack of predictive accuracy, opinion polls are like the weather report: we pay attention, but don’t place too much stock in their predictions, given the constant potential for an abrupt change in conditions. Because it is impossible to generalize the certainty about one such data set to any other equivalent data set, opinion polls have no scientific value.

Possible to establish scientific certitude on the performance of identical cars; humans not identical

Science requires a certain level of certitude combined with the ability to generalize analysis to similar circumstances. Each baseball player’s performance is so individual and transitory that it is impossible to generalize the analysis from one player's set to another player's, or even to the same player's future performance, with any certainty. This doesn’t take away from the pragmatic predictive value of the batting average. It just means that it is impossible to achieve scientific certitude. In contrast, Probability can generalize results from identical machines. For instance, a company could track the performance of hundreds of identical cars (the same make and model) and employ Probability's talents to make well-defined predictions about future performance. Similarly, scientists can examine the effects of an identical drug on a significant number of humans at a certain stage of a disease and make some sound scientific estimates about the future performance of the drug. However, each player is so unique, the pitchers he faces are so different, and the psychological pressure of the big game so variable, that there are no identical humans and circumstances against which to compare statistics.

Player’s Data Set too transient to generalize results

Although Probability can accurately characterize a player's data set, he can't apply this analysis to the player's future with any certitude. This is due to the transitory nature of a ball player’s life as he moves through time and circumstance. A ball player’s performance lies in an unknown future. Aging, accidents, and illness are three common features of life that can have an unpredictable and abrupt effect upon the living data stream of a baseball player’s ‘at bats’.

Individuality of Human Data Streams due to Aging & Innate Differences

For instance, nobody would ever attempt to equate the hitting data set of 20-year-old home run hitter Barry Bonds with his hitting data set at 40 years old. This is due to life’s inevitable aging process, which means that the data set of his performance must be continually redefined. Because of this necessary redefinition, the sets are different and hence aren’t scientifically comparable. Further, no one would ever attempt to equate 20-year-old Barry Bonds' hitting data set with another 20-year-old’s hitting data set, because of inevitable individual differences. In short, the hitting data set of 20-year-old Barry Bonds can only be equated with itself: Barry Bonds ages, and no one is quite like him. The data stream characterizes the change inherent to living systems. The individual and transient nature of a human’s data stream renders the analytic tools of Probability’s fixed-set mathematics incapable of establishing the certitude that Science requires.

Living Algorithm's Patterns are scientifically significant, not the individual measures.

The Living Algorithm's information patterns influence human behavior.

The same analysis applies to the Living Algorithm’s Predictive Cloud. Data streams are so individual and transient that it is impossible to achieve the certitude that Science requires. However, the Living Algorithm's predictive clouds supply an abundance of practical information when applied to living data streams, as evidenced in our batting average example. It is a plausible assumption that living systems employ this pragmatic tool for assessing environmental patterns, so as to determine the most appropriate response and thereby ensure survival. If Life employs the predictive clouds, then Life is also subject to the Living Algorithm's information patterns. In Triple Pulse Studies, the first notebook in this series, we examined many examples of how Life has employed the Triple Pulse, one of the Living Algorithm's information patterns, to organize human behavior regarding sleep. Accordingly, the scientific value of the Living Algorithm System lies in its ability to reveal the underlying information patterns that motivate behavior.

To establish scientific certitude, Probability's analytical tools must be applied to the Living Algorithm's predictions.

However, the Living Algorithm System doesn't have the tools to establish the scientific certitude of these connections. Probability’s analytical talents are required to verify, or at least to establish the limits on, the correspondences between human behavior and the Living Algorithm's information patterns. Once again, it seems as if Probability and the Living Algorithm represent complementary systems.

Probability's general statements have scientific value; Living Algorithm's individual statements have none.

As complementary systems, Probability provides a mathematical analysis of the general nature of fixed data sets, while the Living Algorithm provides a mathematical analysis of the individual moments in dynamic data streams. Further, due to the fixed and general nature of Probability's analysis of sets, the results can be generalized with a distinct measure of scientific certitude. In contrast, due to the dynamic and individual nature of the Living Algorithm's analysis of moments, the measures that determine the trajectories of individual moments cannot be generalized. Hence the individual measures generated by the Living Algorithm, while possessing great pragmatic value, have no scientific value.
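To illustrate the contrast, here is a minimal sketch in Python. It computes a single fixed-set mean over an invented data stream, alongside a decaying running average that is re-weighted at every new moment. The decaying average is only a hypothetical stand-in for the Living Algorithm's moment-by-moment measure, which this article does not define. The point of the sketch: the fixed-set mean is one general number describing the whole set, while the running measure traces an individual trajectory that never settles into a generalizable summary.

```python
# Hypothetical data stream: 1 = hit, 0 = out.
stream = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]

# Probability's style of measure: one mean characterizing the fixed set.
fixed_mean = sum(stream) / len(stream)

# A decaying running average, offered only as a stand-in for a
# moment-by-moment measure: each new data point nudges the estimate,
# with D acting as a window-like decay factor.
D = 4.0
running = 0.0
trajectory = []
for x in stream:
    running += (x - running) / D  # the newest moment gets weight 1/D
    trajectory.append(round(running, 3))

print(f"fixed-set mean: {fixed_mean:.3f}")           # one general number
print(f"moment-by-moment trajectory: {trajectory}")  # an individual path
```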

Living Algorithm reveals information patterns that seem to influence human behavior.

While the individual measures of the Living Algorithm have no scientific value, the Living Algorithm's method of digesting information reveals patterns that seem to influence living behavior (Triple Pulse Studies). What is the nature of these patterns? In what manner do they differ from the patterns that Probability reveals?

See how Probability's rise to the top of the subatomic world sets the stage for the Living Algorithm.

To understand these differences, the next article offers a historical investigation of Probability's rise to the top of the subatomic world. Ironically, the story of how Probability became famous as ruler of the subatomic world illustrates both his inherent strengths and his weaknesses. Further, it pertains to why the Living Algorithm's dynamic nature is ideally suited to determining causality, while Probability's static nature is better suited to description. As with other aspects of these respective systems, their talents are mutually exclusive. Read the next article in the stream, Description vs. Causality; Static vs. Dynamics, to see how Probability was able to patch up the gaps in the subatomic universe that were left by classical Mechanics, and how Probability sets the stage for the Living Algorithm's entry onto the scientific stage.

Or perhaps you've tired of this endless exposition. For a fresh allegorical perspective, reenter our alternate universe and read Probability's Numbers vs. Living Algorithm Patterns.
